Improved practical Byzantine fault tolerance consensus algorithm based on Raft algorithm
WANG Jindong, LI Qiang
Journal of Computer Applications    2023, 43 (1): 122-129.   DOI: 10.11772/j.issn.1001-9081.2021111996
Since the Practical Byzantine Fault Tolerance (PBFT) consensus algorithm applied to consortium blockchains suffers from insufficient scalability and high communication overhead, an improved practical Byzantine fault tolerance consensus algorithm based on the Raft algorithm, named K-RPBFT (K-medoids Raft based Practical Byzantine Fault Tolerance), was proposed. Firstly, the blockchain was sharded based on the K-medoids clustering algorithm: all nodes were divided into multiple node clusters, each cluster constituting a single shard, so that global consensus was improved to hierarchical multi-center consensus. Secondly, consensus between the cluster central nodes of the shards was performed with the PBFT algorithm, while an improved Raft algorithm based on supervision nodes was used for intra-shard consensus. The supervision mechanism in each shard gave the Raft algorithm a certain Byzantine fault tolerance capability and improved the security of the algorithm. Experimental analysis shows that compared with the PBFT algorithm, K-RPBFT greatly reduces communication overhead and consensus latency, improves consensus efficiency and throughput while retaining Byzantine fault tolerance, and has good scalability and dynamic adaptability, enabling consortium blockchains to be applied in a wider range of fields.
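The clustering step that produces the shards can be sketched with a minimal PAM-style k-medoids pass. This is an illustrative sketch only: the 2-D node features, the `k_medoids` helper and its parameters are assumptions for illustration, not taken from the paper.

```python
import random

def k_medoids(points, k, dist, iters=20, seed=0):
    """Naive PAM-style k-medoids: alternate assignment and medoid update."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        # Assign every point to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Re-pick each medoid as the member minimising total in-cluster distance.
        new_medoids = []
        for members in clusters.values():
            best = min(members,
                       key=lambda c: sum(dist(points[c], points[j]) for j in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

In the scheme described above, each resulting cluster would run the supervised Raft protocol internally, while its medoid (cluster central node) joins the inter-shard PBFT group.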
Multi-person collaborative creation system of building information modeling drawings based on blockchain
SHEN Yumin, WANG Jinlong, HU Diankai, LIU Xingyu
Journal of Computer Applications    2021, 41 (8): 2338-2345.   DOI: 10.11772/j.issn.1001-9081.2020101549
Multi-person collaborative creation of Building Information Modeling (BIM) drawings is very important in large building projects. However, existing methods based on Revit and similar modeling software or on cloud services suffer from version confusion, poor traceability, data security risks and other problems. To solve these problems, a blockchain-based multi-person collaborative creation system for BIM drawings was designed. Using an on-chain and off-chain collaborative storage method, the blockchain stored the BIM drawing information produced after each creation step, while the database stored the complete BIM drawings. The decentralization, traceability and tamper-resistance of the blockchain were used to keep the version history of the BIM drawings clear, to provide a basis for future copyright division, and to enhance the data security of BIM drawing information. Experimental results show that the average block generation time of the proposed system under multi-user concurrency is 0.467 85 s and the maximum processing rate is 1 568 transactions per second, which proves that the system is reliable and can meet the needs of actual application scenarios.
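The on-chain/off-chain split can be sketched simply: the full drawing bytes go to the database side, while the chain records only a hash and creation metadata. All names below (`Ledger`, `save_version`) are illustrative; no specific blockchain API is implied.

```python
import hashlib
import json

class Ledger:
    """Toy append-only chain of drawing-version records."""

    def __init__(self):
        self.blocks = []

    def append(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(record, sort_keys=True)
        block = {"prev": prev, "record": record,
                 "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
        self.blocks.append(block)
        return block["hash"]

off_chain_store = {}  # database side: digest -> full drawing bytes

def save_version(ledger, author, drawing_bytes):
    digest = hashlib.sha256(drawing_bytes).hexdigest()
    off_chain_store[digest] = drawing_bytes                      # full BIM drawing off chain
    return ledger.append({"author": author, "digest": digest})   # hash + metadata on chain

def verify_version(ledger, index):
    """Check the off-chain copy against the on-chain digest."""
    rec = ledger.blocks[index]["record"]
    return hashlib.sha256(off_chain_store[rec["digest"]]).hexdigest() == rec["digest"]
```

Any tampering with the off-chain copy changes its hash and no longer matches the on-chain record, which is what makes the version history traceable.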
Image description generation algorithm based on improved attention mechanism
LI Wenhui, ZENG Shangyou, WANG Jinjin
Journal of Computer Applications    2021, 41 (5): 1262-1267.   DOI: 10.11772/j.issn.1001-9081.2020071078
Image description aims to express the global information contained in an image in sentences, which requires the image description generation model to extract image information and express it in sentences. The traditional model, based on a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), can realize image-to-sentence translation to a certain extent, but has low accuracy and training speed when extracting key information of the image. To solve this problem, an image description generation model with an improved attention mechanism based on CNN and Long Short-Term Memory (LSTM) network was proposed. VGG19 and ResNet101 were used as the feature extraction networks, and group convolution was introduced into the attention mechanism to replace the traditional fully connected operation, so as to improve the evaluation indices. The model was trained on the public datasets Flickr8K and Flickr30K and validated with various evaluation indices: BLEU (Bilingual Evaluation Understudy), ROUGE_L (Recall-Oriented Understudy for Gisting Evaluation), CIDEr (Consensus-based Image Description Evaluation) and METEOR (Metric for Evaluation of Translation with Explicit Ordering). Experimental results show that compared with the model with the traditional attention mechanism, the proposed model improves the accuracy of the image description task and outperforms the traditional model on all four evaluation indices.
Vein recognition algorithm based on Siamese nonnegative matrix factorization with transferability
WANG Jinkai, JIA Xu
Journal of Computer Applications    2021, 41 (3): 898-903.   DOI: 10.11772/j.issn.1001-9081.2020060965
Since a recognition algorithm trained on one vein image dataset lacks generality on other datasets, a Siamese Nonnegative Matrix Factorization (NMF) model with transferability was proposed. Firstly, supervised learning on vein images with the same labels in the source dataset was achieved by using two NMF models with identical structures and shared parameters. Then, the difference between the vein features of the two datasets was reduced with a maximum mean discrepancy constraint, that is, knowledge from the source dataset was transferred to the target dataset. Finally, vein images were matched based on cosine distance. Experimental results show that the proposed algorithm not only achieves high recognition accuracy on the source dataset, but also reduces the average False Accept Rate (FAR) and average False Reject Rate (FRR) to 0.043 and 0.055 respectively on the target dataset while using only a small number of its vein images. In addition, the average recognition time of the proposed algorithm is 0.56 seconds, which meets the real-time requirement of recognition.
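The maximum mean discrepancy term used to align the two feature distributions can be written down directly. Below is a minimal biased RBF-kernel estimate; the function names and the bandwidth `gamma` are illustrative choices, not values from the paper.

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=0.5):
    """Biased squared MMD estimate: mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y)."""
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy
```

Minimising this quantity over the factorized features pulls the source-domain and target-domain representations together, which is the transfer step the abstract describes.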
Interactive water flow heating simulation based on smoothed particle hydrodynamics method
WANG Jiangkun, HE Kunjin, CAO Hongfei, WANG Jinqiang, ZHANG Yan
Journal of Computer Applications    2020, 40 (5): 1409-1414.   DOI: 10.11772/j.issn.1001-9081.2019101734
To solve the problems of difficult interaction and low efficiency in traditional water flow heating simulation, a thermal motion simulation method based on Smoothed Particle Hydrodynamics (SPH) was proposed to control the water heating process interactively. Firstly, the continuous water flow was discretized into particles with the SPH method; the particle group simulated the movement of the water flow, and collision detection confined particle motion to the container. Then, the water particles were heated through a heat conduction model with a Dirichlet boundary condition, and the motion state of each particle was updated according to its temperature, so as to simulate the thermal motion of the water during heating. Finally, editable system parameters and constraint relationships were defined, and the heating and motion of water under multiple conditions were simulated through human-computer interaction. Taking the heating simulation of a solar water heater as an example, the interactivity and efficiency of the SPH method in solving the heat conduction problem were verified by modifying only a few parameters to control the heating of the water heater, which makes interactive water-flow heating convenient to apply in other virtual scenes.
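The Dirichlet-boundary heat-conduction idea used above can be illustrated with a minimal 1-D explicit update: the boundary cells are held at the heater temperature while interior values diffuse toward it. This is a grid sketch rather than true SPH particles, and the function names and stability factor `alpha` are illustrative assumptions.

```python
def heat_step(temps, boundary, alpha=0.1):
    """One explicit conduction step; both ends held at the boundary temperature."""
    t = [boundary] + temps[1:-1] + [boundary]   # Dirichlet boundary condition
    return [t[0]] + [
        t[i] + alpha * (t[i - 1] - 2 * t[i] + t[i + 1])
        for i in range(1, len(t) - 1)
    ] + [t[-1]]

def heat(temps, boundary, steps, alpha=0.1):
    """Iterate the explicit scheme; alpha <= 0.5 keeps it stable."""
    for _ in range(steps):
        temps = heat_step(temps, boundary, alpha)
    return temps
```

In an SPH formulation the same update would run over particle neighbourhoods weighted by a smoothing kernel instead of over fixed grid neighbours.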
Vehicle face recognition algorithm based on NMF with weighted and orthogonal constraints
WANG Jinkai, JIA Xu
Journal of Computer Applications    2020, 40 (4): 1050-1055.   DOI: 10.11772/j.issn.1001-9081.2019081338
To improve vehicle face recognition accuracy on multi-category samples with a limited number of annotations, a vehicle face recognition algorithm based on improved Nonnegative Matrix Factorization (NMF) was proposed. Firstly, the shape features of local regions of the vehicle face image were extracted by the Histogram of Oriented Gradients (HOG) operator and used as the original features. Then, an NMF model with multiple weights, orthogonality and sparseness constraints was proposed, with which the feature bases describing the key regions of the vehicle face image were acquired and feature dimension reduction was achieved. Finally, the discrete cosine distance was used to calculate the similarity between features and decide whether two vehicle face images matched. Experimental results show that the proposed algorithm achieves a good recognition effect with an accuracy of 97.68% on the established vehicle face image dataset, while meeting the real-time requirement.
Time series trend prediction at multiple time scales
WANG Jince, DENG Yueping, SHI Ming, ZHOU Yunfei
Journal of Computer Applications    2019, 39 (4): 1046-1052.   DOI: 10.11772/j.issn.1001-9081.2018091882
A multiple-time-scale trend prediction algorithm based on a novel feature model was proposed to solve the trend prediction problem for stock and fund time series. Firstly, a feature tree with features at multiple time scales was extracted from the original series, describing the series by the characteristics at each level and the relationships between levels. Then, the hidden states in the feature sequences were extracted by clustering. Finally, a Multiple Time Scaled Trend Prediction Algorithm (MTSTPA) was designed using a Hidden Markov Model (HMM) to simultaneously predict the trend and its duration at different scales. In experiments on real stock datasets, the prediction accuracy at every scale is more than 60%. Compared with the same algorithm without the feature tree, the model using the feature tree is more efficient, with accuracy up to 10 percentage points higher at a certain scale. MTSTPA also outperforms the classical Auto-Regressive Moving Average (ARMA) model and the pattern-based Hidden Markov Model (PHMM), verifying its validity.
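The multi-scale feature idea can be illustrated by labelling the trend of a series at several window sizes, one label sequence per scale. This is a toy stand-in for the paper's feature tree; the window sizes and the endpoint up/down rule are illustrative assumptions.

```python
def trend_labels(series, window):
    """Label each full window 'U' (up), 'D' (down) or 'F' (flat) by endpoint change."""
    labels = []
    for i in range(0, len(series) - window + 1, window):
        delta = series[i + window - 1] - series[i]
        labels.append("U" if delta > 0 else "D" if delta < 0 else "F")
    return labels

def feature_tree(series, windows):
    """Coarse-to-fine trend description: one label sequence per time scale."""
    return {w: trend_labels(series, w) for w in windows}
```

In the paper's setting, sequences like these (after clustering into hidden states) would feed the HMM that predicts the next trend and its duration per scale.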
Business data security of system wide information management based on content mining
MA Lan, WANG Jingjie, CHEN Huan
Journal of Computer Applications    2019, 39 (2): 488-493.   DOI: 10.11772/j.issn.1001-9081.2018071449
Considering the data security problems of service sharing in SWIM (System Wide Information Management), the risks in the SWIM business process were analyzed, and a malicious data filtering method based on the Latent Dirichlet Allocation (LDA) topic model and content mining was proposed. Firstly, big data analysis was performed on four kinds of SWIM business data; then the LDA model was used to extract features from the business data to realize content mining. Finally, the KMP (Knuth-Morris-Pratt) matching algorithm was used to search for pattern strings in the main string so as to detect SWIM business data containing malicious keywords. The proposed method was tested on a Linux platform. The experimental results show that the proposed method can effectively mine the content of SWIM business data and has better detection performance than the compared methods.
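The KMP scan the method relies on can be written in a few lines using the standard failure-function formulation (function and variable names are illustrative):

```python
def kmp_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    if not pattern:
        return 0
    # Failure function: length of the longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text without ever moving backwards in it.
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1
```

In the setting above, the "pattern" would be a malicious keyword and the "text" a SWIM business message; the failure function keeps the scan linear in the message length.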
Revocable identity-based encryption scheme with outsourcing decryption and member revocation
WANG Zhanjun, MA Haiying, WANG Jinhua, LI Yan
Journal of Computer Applications    2019, 39 (12): 3563-3568.   DOI: 10.11772/j.issn.1001-9081.2019071215
To address the drawbacks of low key-update efficiency and high decryption cost in Revocable Identity-Based Encryption (RIBE), which make it unsuitable for lightweight devices, an RIBE scheme with Outsourced Decryption and member revocation (RIBE-OD) was proposed. Firstly, a full binary tree was created and a random one-degree polynomial was picked for each node of the tree. Then, the one-degree polynomials were used to create the private keys of all users and the update keys of the unrevoked users, by combining the IBE scheme based on the exponent-inversion model with the complete subtree method; revoked users cannot obtain update keys and are thus deprived of the ability to decrypt. Next, the majority of the decryption computation was securely outsourced to cloud servers by modifying the private key generation algorithm with the outsourced decryption technique and adding a ciphertext transformation algorithm, so that lightweight devices only need to perform a small amount of simple computation to decrypt ciphertexts. Finally, the proposed scheme was proved secure under the Decisional Bilinear Diffie-Hellman Inversion (DBDHI) assumption. Compared with the Boldyreva-Goyal-Kumar (BGK) scheme, the proposed scheme improves key-update efficiency by 85.7% and reduces the decryption cost of lightweight devices to a single elliptic-curve exponentiation, so it is suitable for lightweight devices.
Design of experience-replay module with high performance
CHEN Bo, WANG Jinyan
Journal of Computer Applications    2019, 39 (11): 3242-3249.   DOI: 10.11772/j.issn.1001-9081.2019050810
Since a straightforward implementation of the experience-replay procedure using Python data structures may become a performance bottleneck in Deep Q-Network (DQN) applications, a design scheme for a universal, high-performance experience-replay module was proposed. The module consists of two software layers: the "kernel", written in C++, implements the fundamental experience-replay functions to achieve high execution efficiency, while the "wrapper", written in Python, encapsulates the module's functionality and provides an object-oriented call interface to guarantee usability. The software structure and algorithms for the critical operations of experience-replay were carefully designed: the priority-replay mechanism was implemented as a logically separated accessory of the main module, sample verification was moved forward from the "get_batch" operation to the "record" operation, and efficient strategies and algorithms were used for eliminating samples. With these measures, the module is universal and extensible. The experimental results show that the module optimizes the execution efficiency of the experience-replay process well: the two critical operations, "record" and "get_batch", execute efficiently, with "get_batch" about 100 times faster than a straightforward implementation based on Python data structures. The experience-replay process is therefore no longer a performance bottleneck, meeting the requirements of various DQN-related applications.
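The two critical operations can be sketched in pure Python as a fixed-capacity ring buffer: `record` overwrites the oldest slot in O(1) and validates on write (as the abstract suggests), and `get_batch` samples uniformly. The class and method names follow the abstract's wording; the C++ kernel itself is of course not reproduced here.

```python
import random

class ReplayBuffer:
    """Fixed-capacity ring buffer with O(1) record and uniform get_batch."""

    def __init__(self, capacity, seed=None):
        self.capacity = capacity
        self.data = [None] * capacity
        self.size = 0          # number of valid samples
        self.head = 0          # next write position; the oldest sample is overwritten
        self.rng = random.Random(seed)

    def record(self, transition):
        # Validate once on record, not on every get_batch.
        if not isinstance(transition, tuple) or len(transition) != 5:
            raise ValueError("expected (state, action, reward, next_state, done)")
        self.data[self.head] = transition
        self.head = (self.head + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def get_batch(self, n):
        idx = [self.rng.randrange(self.size) for _ in range(n)]
        return [self.data[i] for i in idx]
```

A priority-replay variant would replace the uniform `randrange` with sampling weighted by TD error, kept logically separate from this core, which mirrors the layering the abstract describes.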
Sentiment analysis of entity aspects based on multi-attention long short-term memory
ZHI Shuting, LI Xiaoge, WANG Jingbo, WANG Penghua
Journal of Computer Applications    2019, 39 (1): 160-167.   DOI: 10.11772/j.issn.1001-9081.2018061232
Aspect sentiment analysis is a fine-grained task in sentiment classification. Concerning the problem that traditional neural network models cannot accurately construct the sentiment features of aspects, a Long Short-Term Memory with Multi-ATTention and Aspect Context (LSTM-MATT-AC) neural network model was proposed. Different types of attention mechanisms were added at different positions of a bidirectional Long Short-Term Memory (LSTM) network, making full use of multi-attention to let the model focus on the sentiment information of specific aspects in a sentence from different perspectives, thus compensating for the deficiency of a single attention mechanism. At the same time, by combining aspect context information encoded independently by the bidirectional LSTM, the model could capture deeper sentiment information and effectively distinguish the sentiment polarity of different aspects. Experiments on the SemEval2014 Task4 and Twitter datasets verified the effectiveness of the different attention mechanisms and of independent context processing for aspect sentiment analysis. The experimental results show that the accuracy of the proposed model reaches 80.6%, 75.1% and 71.1% on the Restaurant, Laptop and Twitter datasets respectively, a further improvement over previous neural-network-based sentiment analysis models.
Forecasting model of pollen concentration based on particle swarm optimization and support vector machine
ZHAO Wenfang, WANG Jingli, SHANG Min, LIU Yanan
Journal of Computer Applications    2019, 39 (1): 98-104.   DOI: 10.11772/j.issn.1001-9081.2018071626
To improve the accuracy of pollen concentration forecasts and address the low accuracy of current forecast models, a daily pollen concentration forecasting model based on the Particle Swarm Optimization (PSO) algorithm and Support Vector Machine (SVM) was proposed. Firstly, feature vectors were extracted by correlation analysis, selecting meteorological factors strongly correlated with pollen concentration, such as temperature, daily temperature range, relative humidity, precipitation, wind and sunshine hours. Secondly, an SVM prediction model was established on these vectors and pollen concentration observations; the PSO algorithm was designed to optimize the SVM parameters, and the optimal parameters were used to construct the daily prediction model. Finally, the optimized SVM model was used to forecast pollen concentration 24 hours in advance, and its accuracy was compared with that of a Multiple Linear Regression (MLR) model and a Back Propagation Neural Network (BPNN) model. The optimized model was also applied to 24-hour-ahead pollen concentration forecasts at the Nanjiao and Miyun meteorological observation stations. The experimental results show that the proposed method outperforms the MLR and BPNN methods, provides promising 24-hour-ahead forecasts, and has good generalization ability.
Satellite scheduling method for intensive tasks based on improved fireworks algorithm
ZHANG Ming, WANG Jindong, WEI Bo
Journal of Computer Applications    2018, 38 (9): 2712-2719.   DOI: 10.11772/j.issn.1001-9081.2018030547
Traditional satellite scheduling models are generally simple; when the problem scale is large and the tasks are concentrated, mutual exclusion between tasks and low task revenue often occur. To solve this problem, an intensive-task imaging satellite scheduling method based on an Improved FireWorks Algorithm (IFWA) was proposed. On the basis of analyzing the characteristics of intensive task processing and imaging satellite observation, synthetic constraint analysis of the tasks was first carried out, and then a multi-satellite intensive task scheduling Constraint Satisfaction Problem (CSP) model based on task synthesis was established, comprehensively considering constraints such as the observable time windows of the imaging satellites, the attitude adjustment time between tasks, and the energy and storage capacity of the imaging satellites. Finally, an improved fireworks algorithm with an elitist selection strategy, which preserves population diversity and accelerates convergence, was used to solve the model and obtain a better scheduling scheme. The simulation results show that, compared with a scheduling model that does not consider task synthesis, the proposed model increases average revenue by 30% to 35% and improves time efficiency by 32% to 45%, which validates its feasibility and effectiveness.
Data plane fast forwarding of collaborative caching for software defined networking
ZHU Xiaodong, WANG Jinlin, WANG Lingfang
Journal of Computer Applications    2018, 38 (8): 2343-2347.   DOI: 10.11772/j.issn.1001-9081.2018010088
When in-network nodes with caching ability are used for collaborative caching, packets need to be forwarded quickly according to the surrounding caching status. A data-plane fast-forwarding method was proposed for this problem. Two Bloom filters are kept for each port in the switch to maintain the surrounding caching status at the data plane, and the action set of protocol-oblivious forwarding is extended: the extended action searches the Bloom filters directly, and an optimized forwarding process forwards packets according to the search results, so that packets are forwarded quickly based on the surrounding caching status. The evaluation results show that maintaining the caching status at the controller hits a forwarding performance bottleneck at an input rate of 80 Kb/s, while with the proposed method packets can be forwarded at line speed at an input rate of 111 Mb/s, a forwarding efficiency superior to the output action of protocol-oblivious forwarding. The memory overhead of maintaining the caching status with Bloom filters is at most 20% of that of using flow tables. In Software Defined Networking (SDN) with caching ability, the proposed method can thus maintain the surrounding caching status at the data plane and improve forwarding efficiency for collaborative caching.
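A per-port cache summary of the kind the switch keeps can be sketched with a minimal Bloom filter; here the k bit positions are sliced from one `sha256` digest, and the sizes are illustrative, not the paper's parameters.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; k hash positions derived from one sha256 digest."""

    def __init__(self, m_bits=1024, k=3):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits)  # one byte per bit, for clarity

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        # Slice the digest into k independent 4-byte indices.
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))
```

The data plane would check `name in port_filter` before choosing an output port; a Bloom filter never yields false negatives, and a false positive only costs one redundant forward, never a missed cache hit.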
SIR rumor propagation model on dynamic homogeneity network
FU Wei, WANG Jing, PAN Xiaozhong, LIU Yazhou
Journal of Computer Applications    2018, 38 (7): 1951-1955.   DOI: 10.11772/j.issn.1001-9081.2018010132
To model infected nodes moving out of the system during rumor propagation, a new SIR (Susceptible-Infective-Removal) rumor propagation model on a dynamic homogeneous network was proposed by improving the normalization conditions of the classical SIR rumor propagation model. Firstly, according to the propagation rules and mean-field theory, the rumor propagation dynamics equations were established on the homogeneous network. Then the steady state and infection peak of the propagation process were analyzed theoretically. Finally, the influence of the infection rate, immune rate, real immune coefficient and average network degree on rumor propagation was studied through numerical simulation. The research indicates that, as infected nodes move out of the system, the steady-state value decreases and the infection peak increases slightly compared with the classical SIR model. The peak value of rumor infection increases as the infection probability increases and the immune probability decreases, and the steady-state value of immune nodes increases with the real immune coefficient. The average network degree has no influence on the steady state of rumor propagation; the larger the average degree, the earlier the infection peak arrives. This research expands the application scope of the SIR propagation model from a closed system to a non-closed system, providing theory and numerical support for rumor prevention measures.
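The mean-field dynamics on a homogeneous network of average degree k can be integrated directly. Below is a minimal Euler sketch of the classical closed-system baseline (ds/dt = -λk·s·i, di/dt = λk·s·i - μi, dr/dt = μi); the rates and step size are illustrative, and the paper's non-closed variant would modify the normalization.

```python
def sir_step(s, i, r, lam, mu, k_avg, dt):
    """One Euler step of the mean-field SIR equations on a homogeneous network."""
    new_inf = lam * k_avg * s * i * dt   # susceptible -> infective
    rec = mu * i * dt                    # infective -> removed (immune)
    return s - new_inf, i + new_inf - rec, r + rec

def sir_run(lam=0.1, mu=0.05, k_avg=6, i0=0.01, steps=4000, dt=0.05):
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, lam, mu, k_avg, dt)
        peak = max(peak, i)
    return s, i, r, peak
```

With these rates the effective reproduction number λk/μ is well above 1, so the run shows the outbreak-then-die-out shape whose peak and steady state the abstract analyzes.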
Evolutionary game considering node degree in social network
LIU Yazhou, WANG Jing, PAN Xiaozhong, FU Wei
Journal of Computer Applications    2018, 38 (4): 1029-1035.   DOI: 10.11772/j.issn.1001-9081.2017102431
In the process of rumor spreading, nodes with different degrees have different recognition abilities. An evolutionary game model on a dynamic complex network was proposed based on game theory, in which a new game gain was defined according to node degree. Considering that rumor propagation is often related to node interests, the non-uniform propagation rates of different nodes and the propagation dynamics of rumors were described by introducing recognition ability, and two rumor suppression strategies were proposed. Simulations were conducted on two typical network models and verified on real Facebook network data. The research demonstrates that the fuzziness of a rumor has little effect on the propagation rate and the time required to reach steady state in the BA scale-free network and the Facebook network, but as rumors become fuzzier, their reach in the network expands. Compared with the Watts-Strogatz (WS) small-world network, rumors spread more easily in the BA scale-free network and the Facebook network. The study also finds that, for the same added immune benefit, immune nodes grow more rapidly in the WS small-world network than in the BA scale-free network and the Facebook network. In addition, suppressing the node hazard degree suppresses rumors better than suppressing the game gain.
Path planning algorithm of mobile robot based on particle swarm optimization
HAN Ming, LIU Jiaomin, WU Shuomei, WANG Jingtao
Journal of Computer Applications    2017, 37 (8): 2258-2263.   DOI: 10.11772/j.issn.1001-9081.2017.08.2258
Concerning the slow convergence and local optima of traditional robot path planning algorithms in complicated environments, a new path planning algorithm for mobile robots based on the Particle Swarm Optimization (PSO) algorithm in a repulsion potential field was proposed. Firstly, the grid method was used to produce a preliminary robot path, which served as the initial particle population. The grid size was determined by the shapes, sizes and total area of the obstacles in the map, and the mathematical model of the planned path was completed. Secondly, the particle positions and velocities were continually updated through cooperation between the particles. Finally, a high-security fitness function was constructed using the repulsion potential field of the obstacles to obtain an optimal path from the robot's start point to its target. Simulation experiments were carried out with Matlab. The experimental results show that the proposed algorithm can optimize the path and safely avoid obstacles in a complex environment; the comparison experiments indicate that it converges fast and can escape local optima.
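The PSO update at the heart of such a planner is compact: each particle is pulled toward its personal best and the global best. Below is a minimal sketch minimising a generic cost, with a smooth 2-D bowl standing in for the path-cost-plus-repulsion fitness; all coefficients and names are illustrative.

```python
import random

def pso(cost, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO: inertia w, cognitive c1, social c2."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In the planner above, the cost of a particle would be path length plus repulsion-field penalties near obstacles, so minimising it yields a short and safe path.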
Multi-Agent-based real-time self-adaptive discrimination method
WANG Jing, WANG Chunmei, ZHI Jia, YANG Jiasen, CHEN Tuo
Journal of Computer Applications    2017, 37 (7): 2034-2038.   DOI: 10.11772/j.issn.1001-9081.2017.07.2034
Concerning the problem that existing data discrimination methods cannot adapt to a changeable test environment or realize a continuous real-time discrimination process with a low error rate when applied to the ground integrated test of a payload, a Multi-Agent-based Real-time self-Adaptive Discrimination (MARAD) method was proposed. Firstly, following the design principle of "sensing-decision-execution", four Agents with their own tasks, which also interact and cooperate with each other, were adopted to adapt to the changeable test situation. Secondly, an activity-oriented model was constructed, with the C Language Integrated Production System (CLIPS) as the inference engine, making the discrimination rules independent of test sequences and ensuring the continuity of discrimination. Finally, a fault-tolerance mechanism was introduced into the discrimination rules to decrease the false-positive rate without harming correctness. On the same test data, averaged over three discrimination runs, the MARAD method has the same 0% parameter missing rate as the state modeling method but decreases the activity false-positive rate by 10.54 percentage points; compared with the manual method, it decreases the parameter missing rate by 5.97 percentage points and the activity false-positive rate by 3.02 percentage points, with no human participation in the discrimination. The proposed method can effectively improve the environmental self-adaptability, real-time discrimination continuity and correctness of the system.
Improved algorithm of artificial bee colony based on Spark
ZHAI Guangming, LI Guohe, WU Weijiang, HONG Yunfeng, ZHOU Xiaoming, WANG Jing
Journal of Computer Applications    2017, 37 (7): 1906-1910.   DOI: 10.11772/j.issn.1001-9081.2017.07.1906
To combat the low efficiency of the Artificial Bee Colony (ABC) algorithm in solving combinatorial problems, a parallel ABC optimization algorithm based on Spark was presented. Firstly, the bee colony was divided into subgroups, among which broadcast variables were used to transmit data, and the subgroups were constructed as a resilient distributed dataset. Secondly, a series of transformation operators were used to parallelize the solution search. Finally, gravitational mass calculation replaced roulette-wheel probability selection, reducing the time complexity. Simulation results on the Traveling Salesman Problem (TSP) prove the feasibility of the proposed parallel algorithm. The experimental results show that the proposed algorithm provides a 3.24x speedup over the standard ABC algorithm and converges about 10% faster than the unimproved parallel ABC algorithm, giving it significant advantages on high-dimensional problems.
Integrated indoor positioning algorithm based on D-S evidence theory
WANG Xuqiao, WANG Jinkun
Journal of Computer Applications    2017, 37 (4): 1198-1201.   DOI: 10.11772/j.issn.1001-9081.2017.04.1198
An integrated WiFi/Inertial Measurement Unit (WiFi/IMU) positioning algorithm based on Dempster-Shafer (D-S) evidence theory was proposed for indoor Location Based Services (LBS) over large areas without deployed beacons. Firstly, the signal strength transmission model of a single Access Point (AP) was established, and a Kalman filter was used to denoise the Received Signal Strength Indication (RSSI). Secondly, D-S evidence theory was applied to fuse multi-source real-time measurements, including WiFi signal strength, yaw and the accelerations on all axes, and the fingerprint blocks with high confidence were selected. Finally, the Weighted K-Nearest Neighbor (WKNN) method was used to estimate the terminal position. Numerical simulations on a unit area show a maximum error of 2.36 m and a mean error of 1.27 m, which proves the viability and effectiveness of the proposed algorithm; the cumulative error probability is 88.20% within the typical distance threshold, superior to the 70.82% of C-Support Vector Regression (C-SVR) and the 67.85% of Pedestrian Dead Reckoning (PDR). Furthermore, experiments over the whole area of a real environment show that the proposed algorithm has excellent environmental applicability.
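The final WKNN estimate is a weighted average over the k fingerprints nearest in RSSI space. Below is a minimal sketch; the fingerprint layout and inverse-distance weighting are illustrative assumptions, not the paper's exact scheme.

```python
def wknn(fingerprints, rssi, k=3, eps=1e-6):
    """fingerprints: list of (rssi_vector, (x, y)). Returns the weighted position."""
    def d(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    # Pick the k fingerprints closest to the observed RSSI vector.
    nearest = sorted(fingerprints, key=lambda f: d(f[0], rssi))[:k]
    # Inverse-distance weights; eps guards against an exact match dividing by zero.
    weights = [1.0 / (d(f[0], rssi) + eps) for f in nearest]
    total = sum(weights)
    x = sum(w * f[1][0] for w, f in zip(weights, nearest)) / total
    y = sum(w * f[1][1] for w, f in zip(weights, nearest)) / total
    return x, y
```

In the full pipeline described above, the D-S fusion step would first narrow the candidate fingerprint blocks, and WKNN would run only over the high-confidence block.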
Incremental learning algorithm based on graph regularized non-negative matrix factorization with sparseness constraints
WANG Jintao, CAO Yudong, SUN Fuming
Journal of Computer Applications    2017, 37 (4): 1071-1074.   DOI: 10.11772/j.issn.1001-9081.2017.04.1071
Focusing on the issues that the sparseness of the data obtained by Non-negative Matrix Factorization (NMF) is reduced and the computing scale grows rapidly as the number of training samples increases, an incremental learning algorithm based on graph-regularized non-negative matrix factorization with sparseness constraints was proposed. It not only considered the geometric structure of the data representation, but also imposed sparseness constraints on the coefficient matrix, and combined both with incremental learning. By reusing the results of the previous factorization in the iterative computation together with the sparseness constraints and graph regularization, the computational cost was reduced and the sparseness of the factorized data was greatly improved. Experiments on both the ORL and PIE face recognition databases demonstrate the effectiveness of the proposed method.
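For orientation, a minimal sketch of the baseline multiplicative-update NMF that the proposed method builds on (the paper additionally adds graph regularization, sparseness terms, and incremental reuse of previous factors; none of that is shown here, and the rank/iteration values are illustrative):

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Baseline multiplicative-update NMF: V (n x m, non-negative) ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        # Lee-Seung updates; each step does not increase ||V - WH||_F
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The incremental variant in the paper avoids rerunning such iterations from scratch when new samples arrive, which is where the computational savings come from.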
Range query authentication for outsourced spatial databases
HU Xiaoyan, WANG Jingyu, LI Hairong
Journal of Computer Applications    2017, 37 (4): 1021-1025.   DOI: 10.11772/j.issn.1001-9081.2017.04.1021
In existing spatial range query authentication methods such as VR-tree and MR-tree, the transmission cost from server to client is high and the verification efficiency of the client is low, because the Verification Object (VO) contains too much authentication information. To resolve these problems, a new index structure, MGR-tree, was proposed. First of all, by embedding an R-tree in each leaf node of a grid tree, the size of the VO was decreased and the efficiency of query and authentication was improved. In addition, an optimized index, MHGR-tree, which exploits the properties of the Hilbert curve, and a filtering policy were proposed to accelerate verification. Experimental results show that the proposed method outperforms MR-tree. In the best case, the verification object size and the authentication time of MHGR are 63% and 19% of those of MR, respectively.
Ciphertext policy attribute-based encryption scheme with weighted attribute revocation
WANG Jingwei, YIN Xinchun
Journal of Computer Applications    2017, 37 (12): 3423-3429.   DOI: 10.11772/j.issn.1001-9081.2017.12.3423
Most existing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) schemes cannot support multi-state representation of attributes, and the computation overhead of the encryption and decryption phases is huge. To solve these problems, a CP-ABE scheme with Weighted Attribute Revocation (CPABEWAR) was proposed. On the one hand, the expressive ability of attributes was improved by introducing the concept of weighted attributes. On the other hand, to reduce the computation cost, part of the computing tasks were outsourced to the Cloud Service Provider (CSP) under the premise of ensuring data security. The analysis results show that the proposed CPABEWAR is proven secure against chosen plaintext attacks under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. The proposed scheme simplifies the access tree structure at the cost of a small amount of storage space, and improves system efficiency and the flexibility of access control, making it suitable for cloud users with limited computing power.
Foreground extraction with genetic mechanism and difference of Gaussian
CHEN Kaixing, LIU Yun, WANG Jinhai, YUAN Yubo
Journal of Computer Applications    2017, 37 (11): 3231-3237.   DOI: 10.11772/j.issn.1001-9081.2017.11.3231
Aiming at the difficult problem of unsupervised or automatic foreground extraction, an automatic foreground extraction method based on a genetic mechanism and difference of Gaussian, named GFO, was proposed. Firstly, difference of Gaussian was used to extract the relatively important regions of the image, which were defined as candidate seed foregrounds. Secondly, based on the edge information of the original image and the candidate seed foregrounds, the contour of the foreground object, called the star convex contour, was generated according to connectivity and the convex sphere principle. Thirdly, an adaptive fitness function was constructed, the seed foreground was selected, and the genetic mechanism of selection, crossover and mutation was used to obtain an accurate and valid final foreground. Experimental results on the Achanta database and multiple videos show that the GFO method outperforms the existing automatic foreground extraction method based on difference of Gaussian (FMDOG), achieving good extraction results in recognition accuracy, recall rate, and the Fβ index.
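The difference-of-Gaussian step in the first stage can be sketched in pure Python (shown in 1-D for brevity; the σ values, kernel radius, and border handling are illustrative assumptions, not the paper's settings):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    ks = [math.exp(-(i * i) / (2 * sigma * sigma))
          for i in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def convolve1d(signal, kernel):
    """Convolution with clamp-to-edge border handling."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += k * signal[idx]
        out.append(acc)
    return out

def difference_of_gaussians(signal, sigma1=1.0, sigma2=2.0):
    """Band-pass response: fine blur minus coarse blur highlights edges/blobs."""
    r = int(3 * sigma2)
    g1 = convolve1d(signal, gaussian_kernel(sigma1, r))
    g2 = convolve1d(signal, gaussian_kernel(sigma2, r))
    return [a - b for a, b in zip(g1, g2)]
```

Regions where this response is large in magnitude are the "relatively important" areas that GFO takes as candidate seed foregrounds; on images the same computation runs in 2-D.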
Modeling and simulating thermotaxis behavior of Caenorhabditis elegans based on artificial neural network
LI Mingxu, DENG Xin, WANG Jin, WANG Xiao, ZHANG Xiaomou
Journal of Computer Applications    2016, 36 (7): 1909-1913.   DOI: 10.11772/j.issn.1001-9081.2016.07.1909
To research the thermotaxis behavior of Caenorhabditis elegans (C. elegans), a new method based on artificial neural networks was proposed to model and simulate it. Firstly, the motion model of the nematode was established. Secondly, a nonlinear function was designed to approximate the movement logic of the nematode's thermotaxis. Thirdly, the speed and orientation-change capabilities were implemented with artificial neural networks. Finally, simulations were carried out in the Matlab environment to reproduce the thermotaxis behavior of the nematode. The experimental results show that a Back Propagation (BP) neural network simulates the thermotaxis behavior of C. elegans better than a Radial Basis Function (RBF) neural network. They also demonstrate that the proposed method can successfully model the thermotaxis behavior of C. elegans and, to some extent, reveal the essence of its thermotaxis, which theoretically supports research on thermotaxis for crawling robots.
Wear-leveling algorithm for NAND flash memory based on separation of hot and cold logic pages
WANG Jinyang, YAN Hua
Journal of Computer Applications    2016, 36 (5): 1430-1433.   DOI: 10.11772/j.issn.1001-9081.2016.05.1430
To address the problems of existing garbage collection algorithms for NAND flash memory, an efficient algorithm called AWGC (Age With Garbage Collection) was presented to improve the wear leveling of NAND flash memory. A hybrid policy combining the age of invalid pages, the erase count of physical blocks, and the update frequency of physical blocks was defined to select the block to be reclaimed. Meanwhile, a new heat calculation method for logic pages was derived, and cold-hot separation of the valid pages in the reclaimed block was conducted. Compared with the GReedy (GR), Cost-Benefit (CB), Cost-Age-Time (CAT) and File-aware Garbage Collection (FaGC) algorithms, the proposed algorithm not only achieves good wear leveling, but also significantly reduces the total number of erase and copy operations.
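The abstract names three factors for victim-block selection but not the exact formula, so the following is a hypothetical hybrid score combining them (the `Block` fields, weights, and multiplicative form are all assumptions for illustration, not AWGC's actual policy):

```python
class Block:
    def __init__(self, pages, erase_count, last_update):
        self.pages = pages              # per-page flags: 'valid' / 'invalid'
        self.erase_count = erase_count  # wear of this physical block
        self.last_update = last_update  # timestamp of the most recent write

def victim_score(block, now, max_erase):
    """Hypothetical hybrid score: prefer blocks with many old invalid pages
    and low wear, so reclaiming them frees space without concentrating erases."""
    invalid = sum(1 for p in block.pages if p == 'invalid')
    invalid_ratio = invalid / len(block.pages)   # reclaimable space
    age = now - block.last_update                # colder blocks score higher
    wear = 1.0 - block.erase_count / max_erase   # spare heavily-worn blocks
    return invalid_ratio * age * wear
```

The block with the highest score would be chosen for reclamation; its valid pages would then be classified hot or cold before being copied out, as the algorithm describes.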
Software reliability prediction model based on grey Elman neural network
CAO Weidong, ZHU Yuanzhi, ZHAI Panpan, WANG Jing
Journal of Computer Applications    2016, 36 (12): 3481-3485.   DOI: 10.11772/j.issn.1001-9081.2016.12.3481
Existing software reliability prediction models suffer from large fluctuations in prediction accuracy and poor adaptability on field reliability data with strong randomness and dynamics. To solve these problems, a software reliability prediction model based on a grey Elman neural network was proposed. First, the grey GM(1,1) model was used to predict the failure data and weaken their randomness. Then an Elman neural network was used to model the residuals produced by GM(1,1) and capture their dynamic variation. Finally, the predictions of GM(1,1) and of the Elman residual model were combined to obtain the final result. A simulation experiment was conducted on a field failure dataset produced by a flight inquiry system. The grey Elman neural network model was compared with the Back-Propagation (BP) neural network model and the Elman neural network model; the Mean Squared Error (MSE) and Mean Relative Error (MRE) of the three models were 105.1, 270.9, 207.5 and 0.0011, 0.0021, 0.0016 respectively, so the errors of the grey Elman model were the smallest. The experimental results show that the proposed grey Elman neural network prediction model has higher prediction accuracy.
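The GM(1,1) stage is a standard grey model and can be sketched directly (function names are ours; the least-squares form below is the textbook GM(1,1) construction, not code from the paper):

```python
import math

def gm11_fit(x0):
    """Fit GM(1,1) to a positive series: returns development coefficient a
    and grey input b from the whitened equation x0[k] + a*z[k] = b."""
    n = len(x0)
    # Accumulated generating operation (AGO)
    x1 = [sum(x0[: i + 1]) for i in range(n)]
    # Background values: mean of consecutive accumulated terms
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # Ordinary least squares for x0[k] = -a*z[k] + b
    m = n - 1
    sz = sum(z)
    szz = sum(zi * zi for zi in z)
    sy = sum(x0[1:])
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    a = (sz * sy - m * szy) / (m * szz - sz * sz)
    b = (szz * sy - sz * szy) / (m * szz - sz * sz)
    return a, b

def gm11_predict(x0, a, b, steps=1):
    """Forecast future values via the time-response function, then difference."""
    n = len(x0)
    def x1_hat(k):  # accumulated prediction at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

In the paper's pipeline, the residuals `x0[k] - gm11_predict(...)` over the history would then be fed to the Elman network, and the two predictions summed.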
Fast high average-utility itemset mining algorithm based on utility-list structure
WANG Jinghua, LUO Xiangzhou, WU Qian
Journal of Computer Applications    2016, 36 (11): 3062-3066.   DOI: 10.11772/j.issn.1001-9081.2016.11.3062
In the field of data mining, high utility itemset mining has been widely studied. However, high utility itemset mining does not consider the effect of itemset length; to address this issue, high average-utility itemset mining has been proposed. The existing high average-utility itemset mining algorithms take a long time to discover the high average-utility itemsets. To solve this problem, an improved algorithm named FHAUI (Fast High Average Utility Itemset) was proposed. FHAUI stored the utility information in utility-lists and mined all high average-utility itemsets from the utility-list structure. At the same time, FHAUI adopted a two-dimensional matrix to effectively reduce the number of join operations. Finally, experimental results on several classical datasets show that FHAUI greatly reduces the number of join operations and lowers the time consumption.
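The length normalization that distinguishes average utility from plain utility can be shown in a few lines (a naive sketch over toy transactions, not FHAUI's utility-list machinery; the data values are invented):

```python
def average_utility(itemset, transactions):
    """Average utility = total utility of the itemset across supporting
    transactions, divided by the itemset length."""
    total = 0
    for tx in transactions:  # each tx maps item -> its utility in that tx
        if all(i in tx for i in itemset):
            total += sum(tx[i] for i in itemset)
    return total / len(itemset)
```

Dividing by the itemset length keeps long itemsets from being ranked high merely because they contain many items; FHAUI computes the same measure from utility-lists instead of rescanning transactions.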
Fully secure hierarchical identity-based online/offline encryption
WANG Zhanjun, MA Haiying, WANG Jinhua
Journal of Computer Applications    2015, 35 (9): 2522-2526.   DOI: 10.11772/j.issn.1001-9081.2015.09.2522
Since the encryption algorithm of Hierarchical Identity-Based Encryption (HIBE) is unsuitable for lightweight devices, a fully secure Hierarchical Identity-Based Online/Offline Encryption (HIBOOE) scheme was proposed. This scheme introduced online/offline cryptography into HIBE and divided the encryption algorithm into two stages. First, the offline stage preprocesses most of the heavy computations before the message and the recipient are known; then, once the recipient's identity and the message are available, the online stage can efficiently produce the ciphertext. The experimental results show that the proposed scheme greatly improves encryption efficiency and is suitable for power-constrained devices. Moreover, it is proven fully secure.
Hybrid sampling extreme learning machine for sequential imbalanced data
MAO Wentao, WANG Jinwan, HE Ling, YUAN Peiyan
Journal of Computer Applications    2015, 35 (8): 2221-2226.   DOI: 10.11772/j.issn.1001-9081.2015.08.2221
Many traditional machine learning methods tend to produce a biased classifier on sequential imbalanced data, which lowers the classification precision for the minor class. To improve the classification accuracy of the minor class, a new hybrid sampling online extreme learning machine for sequential imbalanced data was proposed. The algorithm improves the classification accuracy of the minor class while limiting the loss of classification accuracy of the major class, and consists of two stages. In the offline stage, principal curves were introduced to model the confidence regions of the minor and major classes respectively, following a balanced-sample strategy; over-sampling of the minority and under-sampling of the majority were performed based on these confidence regions, and the initial model was established. In the online stage, only the most valuable samples of the major class were chosen according to sample importance, and the network weights were updated dynamically. The algorithm was proven theoretically to have an upper bound on information loss. Experiments were conducted on two UCI datasets and a real-world air pollutant forecasting dataset from Macao. The experimental results show that, compared with existing methods such as Online Sequential Extreme Learning Machine (OS-ELM), Extreme Learning Machine (ELM) and Meta-Cognitive Online Sequential Extreme Learning Machine (MCOS-ELM), the proposed method has higher prediction precision and better numerical stability.
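The balancing idea in the offline stage can be illustrated with a deliberately simplified sketch: the paper samples within principal-curve confidence regions, while this version just replicates and subsamples at random to a common target size (the target-size rule and seeding are our assumptions):

```python
import random

def hybrid_sample(majority, minority, seed=0):
    """Balance two classes: replicate minority samples (over-sampling) and
    draw a subset of the majority (under-sampling) to a common midpoint size."""
    rng = random.Random(seed)
    target = (len(majority) + len(minority)) // 2
    over = minority + [rng.choice(minority)
                       for _ in range(target - len(minority))]
    under = rng.sample(majority, target)
    return over, under
```

Restricting both operations to confidence regions, as the paper does, keeps the synthetic minority samples and the retained majority samples representative of their classes rather than of outliers.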